Patient scheduling is a difficult task because it involves dealing with stochastic factors such as the unknown arrival flow of patients. Scheduling radiotherapy treatments for cancer patients faces a similar problem. Curative patients need to start their treatment within a recommended deadline, i.e., 14 or 28 days after admission, while palliative patients requiring urgent treatment must start within 1 to 3 days after admission. Most cancer centers address the problem by reserving a fixed number of treatment slots for urgent patients. However, this flat-reservation approach is not ideal: it can cause overdue treatments for urgent patients on some days while leaving treatment capacity underused on others, which in turn delays the treatment of curative patients. The problem is particularly severe in large and crowded hospitals. In this paper, we propose a prediction-based approach for online dynamic radiotherapy scheduling. An offline problem, in which all future patient arrivals are known in advance, is solved with integer programming. A regression model is then trained to recognize the links between patient arrival patterns and their ideal waiting times. The trained regression model is subsequently embedded in a prediction-based approach that schedules each patient according to their features and the current state of the calendar. Numerical results show that our prediction-based approach effectively prevents overdue treatments for urgent patients while maintaining good waiting times, compared with other scheduling approaches based on flat-reservation policies.
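As a rough illustration of the two-stage idea, the hedged Python sketch below trains a regressor on ideal waiting times (which the paper obtains from offline integer-programming solutions; random stand-ins are used here) and books each new patient into the free day closest to the predicted target. The feature names and the booking rule are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the prediction-based scheduling idea (not the paper's code).
# Assumes offline integer-programming solutions already produced, for each
# historical patient, an "ideal" waiting time used here as the regression target.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical features: [is_palliative, deadline_days, calendar_load_next_7d]
X_hist = rng.random((500, 3))
y_ideal_wait = rng.integers(1, 14, size=500).astype(float)  # stand-in IP labels

model = GradientBoostingRegressor().fit(X_hist, y_ideal_wait)

def schedule(patient_features, free_slots_by_day):
    """Book the free day closest to the predicted ideal waiting time."""
    target = model.predict(np.asarray(patient_features).reshape(1, -1))[0]
    free_days = [d for d, slots in free_slots_by_day.items() if slots > 0]
    day = min(free_days, key=lambda d: abs(d - target))
    free_slots_by_day[day] -= 1
    return day

print(schedule([1.0, 3.0, 0.7], {d: 2 for d in range(1, 15)}))
```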
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
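To make the most commonly reported strategy concrete, here is a minimal, hypothetical sketch of patch-based training, i.e., sampling fixed-size crops from an image too large to process at once. It is an illustration of the general technique, not code from any surveyed solution.

```python
# Illustrative sketch of patch-based training, the strategy most often reported
# for images too large to process at once. All sizes are assumptions.
import numpy as np

def random_patches(image, patch_size=(64, 64), n=8, seed=0):
    """Sample n random 2D patches from a large image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ph, pw = patch_size
    for _ in range(n):
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        yield image[y:y + ph, x:x + pw]

image = np.zeros((1024, 1024), dtype=np.float32)  # stand-in for a large scan
batch = np.stack(list(random_patches(image)))
print(batch.shape)  # (8, 64, 64): a training batch of small crops
```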
The unit commitment (UC) problem, which determines operating schedules of generation units to meet demand, is a fundamental task in power systems operation. Existing UC methods using mixed-integer programming are not well-suited to highly stochastic systems. Approaches which more rigorously account for uncertainty could yield large reductions in operating costs by reducing spinning reserve requirements; operating power stations at higher efficiencies; and integrating greater volumes of variable renewables. A promising approach to solving the UC problem is reinforcement learning (RL), a methodology for optimal decision-making which has been used to conquer long-standing grand challenges in artificial intelligence. This thesis explores the application of RL to the UC problem and addresses challenges including robustness under uncertainty; generalisability across multiple problem instances; and scaling to larger power systems than previously studied. To tackle these issues, we develop guided tree search, a novel methodology combining model-free RL and model-based planning. The UC problem is formalised as a Markov decision process and we develop an open-source environment based on real data from Great Britain's power system to train RL agents. In problems of up to 100 generators, guided tree search is shown to be competitive with deterministic UC methods, reducing operating costs by up to 1.4%. An advantage of RL is that the framework can be easily extended to incorporate considerations important to power systems operators such as robustness to generator failure, wind curtailment or carbon prices. When generator outages are considered, guided tree search saves over 2% in operating costs as compared with methods using conventional N-x reserve criteria.
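The toy Python sketch below conveys the flavor of guided tree search: a stand-in policy prunes the exponential commitment space, and a short model-based lookahead scores the surviving actions. The cost model, policy, demand forecast, and problem size are all placeholders, not the thesis implementation.

```python
# Toy sketch of guided tree search for unit commitment (assumptions throughout).
import itertools

N_GEN, BRANCH, DEPTH = 3, 4, 2  # tiny system: 3 units, prune to 4 actions

def policy_scores(state, actions):
    # Stand-in for the model-free RL policy; a real agent scores actions by value.
    return {a: -sum(a) for a in actions}

def step_cost(state, action):
    # Stand-in expected operating cost: supply-demand mismatch plus running cost.
    capacity = sum(action) * 100
    return abs(state["demand"] - capacity) + 5 * sum(action)

def guided_search(state, depth=DEPTH):
    """Expand only the BRANCH actions the policy ranks best, then recurse."""
    if depth == 0:
        return 0.0, None
    actions = list(itertools.product([0, 1], repeat=N_GEN))  # on/off per unit
    scores = policy_scores(state, actions)
    actions = sorted(actions, key=scores.get)[:BRANCH]  # policy-guided pruning
    best_cost, best_action = float("inf"), None
    for a in actions:
        nxt = {"demand": state["demand"] * 1.05}  # toy demand forecast
        cost = step_cost(state, a) + guided_search(nxt, depth - 1)[0]
        if cost < best_cost:
            best_cost, best_action = cost, a
    return best_cost, best_action

print(guided_search({"demand": 250.0}))
```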
Domain adaptation has been vastly investigated in computer vision but still requires access to target images at train time, which might be intractable in some conditions, especially for long-tail samples. In this paper, we propose the task of 'Prompt-driven Zero-shot Domain Adaptation', where we adapt a model trained on a source domain using only a general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, bringing them closer to target text embeddings, while preserving their content and semantics. Second, we show that augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets, and gives comparable results on others. The code is available at https://github.com/astra-vision/PODA.
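The hedged sketch below conveys the core optimization: per-channel affine parameters pull source features toward a target-prompt embedding while a content term keeps them close to the originals. The CLIP encoders are replaced by random stand-ins, and the loss weights are assumptions; this is not the released PODA implementation (see the repository linked above for that).

```python
# Sketch of optimizing affine feature transformations toward a text embedding.
import torch

torch.manual_seed(0)
D = 512
src_feat = torch.randn(8, D)  # stand-in for CLIP-encoded source image features
# Stand-in for the CLIP embedding of a target prompt, e.g. "driving at night":
text_emb = torch.nn.functional.normalize(torch.randn(D), dim=0)

scale = torch.ones(D, requires_grad=True)   # affine: feature-wise scale
shift = torch.zeros(D, requires_grad=True)  # affine: feature-wise shift
opt = torch.optim.Adam([scale, shift], lr=1e-2)

for _ in range(100):
    f = src_feat * scale + shift
    f_norm = torch.nn.functional.normalize(f, dim=1)
    align_loss = 1 - (f_norm @ text_emb).mean()   # move toward target prompt
    content_loss = (f - src_feat).pow(2).mean()   # preserve source content
    loss = align_loss + 0.1 * content_loss        # 0.1 is an assumed weight
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))
```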
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
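Because the models are openly released, BLOOM checkpoints can be loaded with the Hugging Face `transformers` library. The snippet below uses the smaller `bigscience/bloom-560m` checkpoint from the same family, since the full 176B model requires multi-GPU hardware.

```python
# Load a small BLOOM-family checkpoint and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```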
Obesity is a serious problem in modern society because it is associated with a substantially reduced quality of life. Ongoing research seeks neurological evidence related to obesity using electroencephalogram (EEG) data. In this study, we developed a novel machine learning model to identify the brain networks of obese females using alpha-band functional connectivity features derived from EEG data. The overall classification accuracy reached 90%. Our findings suggest that the obese brain is characterized by a dysfunctional network in which the areas responsible for processing self-referential information, such as energy requirements, are impaired.
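An illustrative pipeline of this kind might band-pass EEG to the alpha band (8-12 Hz), use channel-pair correlations as functional connectivity features, and classify with a standard model. The sketch below makes those steps concrete on synthetic signals; the filter design, connectivity measure, and classifier are assumptions, not the study's actual methods.

```python
# Hypothetical alpha-band connectivity classification pipeline on fake EEG.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS, N_CH = 250, 8  # assumed sampling rate (Hz) and channel count
b, a = butter(4, [8 / (FS / 2), 12 / (FS / 2)], btype="band")  # alpha band

def connectivity_features(eeg):
    """eeg: (channels, samples) -> vector of unique channel-pair correlations."""
    alpha = filtfilt(b, a, eeg, axis=1)
    corr = np.corrcoef(alpha)
    iu = np.triu_indices(N_CH, k=1)  # upper triangle = unique pairs
    return corr[iu]

rng = np.random.default_rng(0)
X = np.array([connectivity_features(rng.standard_normal((N_CH, 4 * FS)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)  # stand-in obese / non-obese labels
clf = SVC().fit(X, y)
print(clf.score(X, y))
```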
This article presents the CERBERUS robotic system-of-systems, which won the DARPA Subterranean Challenge Final Event, a competition centered on robot autonomy. Subterranean settings make autonomous operation particularly demanding because of their geometric complexity, degraded perceptual conditions, absence of GPS support, harsh navigation conditions, and denied communication. To address this challenge, we developed the CERBERUS system, which exploits the synergy of legged and flying robots, together with robust control, especially for overcoming perilous terrain; multi-modal and multi-robot perception for localization and mapping under sensor degradation; and resilient autonomy through unified exploration path planning and local motion planning that reflects robot-specific limitations. Based on its ability to explore diverse underground environments and its high-level command and control, CERBERUS demonstrated efficient exploration, reliable detection of objects of interest, and accurate mapping. In this article, we report results from the preliminary and final prize runs of the DARPA Subterranean Challenge and discuss the highlights, the challenges faced, and the lessons learned for the benefit of the community.
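As a loose illustration of one ingredient, unified exploration planning over heterogeneous robots, the toy sketch below assigns frontier goals while respecting robot-specific limits. Everything in it (data structures, the altitude limit, the assignment rule) is a simplification invented for illustration, not CERBERUS code.

```python
# Toy frontier assignment across legged and aerial robots (illustrative only).
import math

frontiers = [(2.0, 3.0, 0.0), (5.0, 1.0, 4.0)]  # (x, y, z) candidate goals
robots = {
    "legged-1": {"pos": (0.0, 0.0, 0.0), "max_alt": 0.5},   # ground-bound
    "aerial-1": {"pos": (1.0, 1.0, 0.0), "max_alt": 10.0},  # can fly
}

def assign(frontiers, robots):
    """Send each robot to the nearest frontier it can actually reach."""
    plan = {}
    for name, r in robots.items():
        reachable = [f for f in frontiers if f[2] <= r["max_alt"]]
        if reachable:
            plan[name] = min(reachable, key=lambda f: math.dist(r["pos"], f))
    return plan

print(assign(frontiers, robots))
```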
Close human-robot interaction (HRI), especially in industrial scenarios, has been extensively studied as a way to combine the strengths of humans and robots. For effective HRI, the effectiveness of currently available human-machine communication media and tools should be questioned, and new communication modalities should be explored. This paper proposes a modular architecture that allows human operators to interact with robots through different modalities. In particular, we implemented the architecture to handle gesture and touchscreen input using a smartwatch and a tablet, respectively. Finally, we conducted a comparative user-experience study between these two modalities.
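One plausible way to structure such a modular architecture is to treat each input modality as a plug-in that maps raw events to a common robot-command interface, as in the hedged sketch below. The class names, event formats, and commands are invented for illustration and are not the paper's API.

```python
# Hypothetical modular input architecture: modalities share one interface.
from abc import ABC, abstractmethod

class InputModality(ABC):
    @abstractmethod
    def to_command(self, event: dict) -> str:
        """Translate a raw device event into a robot command string."""

class SmartwatchGestures(InputModality):
    def to_command(self, event):
        # Map recognized wrist gestures to commands (illustrative mapping).
        return {"swipe_left": "move_left", "wrist_flick": "stop"}.get(
            event["gesture"], "noop")

class TabletTouch(InputModality):
    def to_command(self, event):
        # Interpret a touchscreen tap as a target position.
        return f"goto({event['x']}, {event['y']})"

class Robot:
    def execute(self, command):
        print("executing:", command)

modalities = {"watch": SmartwatchGestures(), "tablet": TabletTouch()}
robot = Robot()
robot.execute(modalities["watch"].to_command({"gesture": "swipe_left"}))
robot.execute(modalities["tablet"].to_command({"x": 0.4, "y": 1.2}))
```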
Despite recent advances in the field of causal inference, to date there is no agreed-upon methodology for estimating treatment effects from observational data. The consequence for clinical practice is that, when randomized-trial results are lacking, there is no guidance on what appears to work in real-world scenarios. This paper proposes a pragmatic methodology for obtaining preliminary but robust estimates of treatment effects from observational studies, giving front-line clinicians a measure of confidence in their treatment strategies. Our study design is applied to an open problem: estimating the treatment effect of the proning maneuver on COVID-19 intensive care patients.
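For context, the sketch below shows a generic inverse-propensity-weighted (IPW) estimate on synthetic data, one standard building block for observational treatment-effect estimation. It is a textbook estimator, not the specific study design proposed in the paper.

```python
# Generic IPW average treatment effect estimate on synthetic observational data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))                 # patient covariates
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # confounded treatment
y = 0.5 * t + X[:, 0] + rng.standard_normal(1000)  # outcome, true effect 0.5

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]  # propensity scores
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(f"IPW ATE estimate: {ate:.2f} (true effect 0.5)")
```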
We propose a novel theoretical framework for generative adversarial networks (GANs). We reveal a fundamental flaw in previous analyses, which incorrectly model GAN training schemes as being governed by ill-defined discriminator gradients. We overcome this problem, which has impeded principled studies of GAN training, by accounting for the discriminator's architecture within our framework. To this end, we leverage the theory of infinite-width neural networks to model the discriminator via its neural tangent kernel. We characterize the trained discriminator for a wide range of losses and establish general differentiability properties of the network. From this, we derive new insights about the convergence of the generated distribution, advancing our understanding of GAN training dynamics. We corroborate these results with an analysis toolkit based on our framework and reveal intuitions consistent with GAN practice.
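For reference, the standard definition of the neural tangent kernel used to model an infinite-width discriminator $f_\theta$ is the following (a textbook formula, not one copied from the paper):

```latex
% Neural tangent kernel of the discriminator f_\theta (standard definition);
% in the infinite-width limit this kernel stays constant during training.
k(x, x') \;=\; \mathbb{E}_{\theta \sim \mathrm{init}}
  \left[ \left\langle \nabla_\theta f_\theta(x),\, \nabla_\theta f_\theta(x') \right\rangle \right]
```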